
    Processing asymmetry of transitions between order and disorder in human auditory cortex

    Purpose: To develop an algorithm that resolves intrinsic problems with pencil-beam dose calculations when the particles in a beam overreach a lateral density interface or detour through a laterally heterogeneous medium. Method and Materials: A property of the Gaussian distribution, namely that it can be approximately decomposed into multiple narrower, shifted, and scaled Gaussians, was applied to dynamic splitting of pencil beams in a dose calculation algorithm for proton and ion beams. The method was tested in an experiment with a range-compensated carbon-ion beam, and its effectiveness and efficiency were evaluated for carbon-ion and proton beams in a heterogeneous phantom model. Results: The splitting dose calculation reproduced the detour effect observed in the experiment, which amounted to about 10% at a maximum, comparable in size to the lateral particle-disequilibrium effect. The proton-beam dose generally showed large scattering effects, including the overreach and detour effects. The overall computational times were 9 s and 45 s for non-splitting and splitting carbon-ion beams, and 15 s and 66 s for non-splitting and splitting proton beams. Conclusions: The beam-splitting method was developed and verified to resolve the intrinsic size limitation of the Gaussian pencil-beam model in dose calculation algorithms. The computation slowed by a factor of 5, which would be tolerable for a dose-accuracy improvement of up to 10% in our test case.
    AAPM Annual Meeting 200
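    The decomposition exploited above can be sketched numerically: because a wide Gaussian is the convolution of two narrower ones, discretizing one factor yields a mixture of narrower, shifted, and scaled Gaussians that closely approximates the parent. A minimal illustration (the widths and grids below are hypothetical, not the paper's beam parameters):

```python
import numpy as np

def gauss(x, mu, sigma):
    """Normalized 1-D Gaussian."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

sigma = 3.0        # parent pencil-beam width (hypothetical units)
sigma_sub = 1.0    # width of each sub-beam after splitting
centers = np.linspace(-9.0, 9.0, 25)   # lateral displacements of the sub-beams

# Weights come from discretizing the "remaining" Gaussian factor,
# whose variance is sigma^2 - sigma_sub^2 (convolution identity).
w = gauss(centers, 0.0, np.sqrt(sigma**2 - sigma_sub**2))
w /= w.sum()       # renormalize so the sub-beams carry the full fluence

x = np.linspace(-12.0, 12.0, 2001)
parent = gauss(x, 0.0, sigma)
mixture = sum(wi * gauss(x, ci, sigma_sub) for wi, ci in zip(w, centers))

err = np.max(np.abs(mixture - parent))   # worst-case pointwise mismatch
```

    Each narrow sub-beam can then be transported through the heterogeneous medium independently, which is what lets a splitting algorithm of this kind capture overreach and detour effects that a single wide Gaussian cannot.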

    Human auditory cortical processing of changes in interaural correlation

    Sensitivity to the similarity of the acoustic waveforms at the two ears, and specifically to changes in similarity, is crucial to auditory scene analysis and the extraction of objects from the background. Here, we use the high temporal resolution of magnetoencephalography to investigate the dynamics of cortical processing of changes in interaural correlation, a measure of interaural similarity, and compare them with behavior. Stimuli are interaurally correlated or uncorrelated wideband noise, immediately followed by the same noise with intermediate degrees of interaural correlation. Behaviorally, listeners' sensitivity to changes in interaural correlation is asymmetrical: listeners are faster and better at detecting transitions from correlated noise than transitions from uncorrelated noise. The cortical response to the change in correlation is characterized by an activation sequence starting from ∼50 ms after the change. The strength of this response parallels behavioral performance: auditory cortical mechanisms are much less sensitive to transitions from uncorrelated noise than from correlated noise. In each case, sensitivity increases with interaural correlation difference. Brain responses to transitions from uncorrelated noise lag those from correlated noise by ∼80 ms, which may be the neural correlate of the observed behavioral response time differences. Importantly, we demonstrate differences in the location and time course of neural processing: transitions from correlated noise are processed by a distinct neural population, and with greater speed, than transitions from uncorrelated noise.

    Acoustically driven cortical delta oscillations underpin prosodic chunking

    Oscillation-based models of speech perception postulate a cortical computational principle by which decoding is performed within a window structure derived by a segmentation process. Segmentation of syllable-sized chunks is realized by a theta oscillator. We provide evidence for an analogous role of a delta oscillator in the segmentation of phrase-sized chunks. We recorded magnetoencephalography (MEG) in humans while participants performed a target identification task. Random-digit strings, with phrase-long chunks of two digits, were presented at chunk rates of 1.8 Hz or 2.6 Hz, inside or outside the delta frequency band (defined here as 0.5-2 Hz). Strong periodicities were elicited by chunk rates inside the delta band in superior and middle temporal areas and in speech-motor integration areas. Periodicities were diminished or absent for chunk rates outside delta, in line with behavioral performance. Our findings show that prosodic chunking of phrase-sized acoustic segments is correlated with acoustically driven delta oscillations, expressing anatomically specific patterns of neuronal periodicities.

    The impact of phase entrainment on auditory detection is highly variable: Revisiting a key finding

    Ample evidence shows that the human brain carefully tracks acoustic temporal regularities in the input, perhaps by entraining cortical neural oscillations to the rate of the stimulation. To what extent the entrained oscillatory activity influences processing of upcoming auditory events remains debated. Here, we revisit a critical finding from Hickok et al. (2015) that demonstrated a clear impact of auditory entrainment on subsequent auditory detection. Participants were asked to detect tones embedded in stationary noise, following a noise that was amplitude modulated at 3 Hz. Tonal targets occurred at various phases relative to the preceding noise modulation. The original study (N = 5) showed that the detectability of the tones (presented at near-threshold intensity) fluctuated cyclically at the same rate as the preceding noise modulation. We conducted an exact replication of the original paradigm (N = 23) and a conceptual replication using a shorter experimental procedure (N = 24). Neither experiment revealed significant entrainment effects at the group level. A restricted analysis of the subset of participants (36%) who did show the entrainment effect revealed no consistent phase alignment between detection facilitation and the preceding rhythmic modulation. Interestingly, both experiments showed group-wide presence of a non-cyclic behavioural pattern, wherein participants' detection of the tonal targets was lower at early and late time points of the target period. The two experiments highlight both the sensitivity of the task to elicit oscillatory entrainment and the striking individual variability in performance.

    Two attentive strategies reducing subjective distortions in serial duration perception

    Humans tend to perceptually distort (dilate or shrink) the duration of brief stimuli presented in a sequence when discriminating the duration of a second stimulus (Comparison) from the duration of a first stimulus (Standard). This type of distortion, termed "time order error" (TOE), is an important window into the determinants of subjective perception. We hypothesized that stimulus durations would be optimally processed, suppressing subjective distortions in serial perception, if the events to be compared fell within the boundaries of rhythmic attentive sampling (4-8 Hz, theta band). We used a two-interval forced choice (2IFC) experimental design and, in three separate experiments, tested different Standard durations: 120 ms, corresponding to an 8.33 Hz rhythmic attentive window; 160 ms, corresponding to a 6.25 Hz window; and 200 ms, corresponding to a 5 Hz window. We found that TOE, as measured by the Constant Error metric, is sizeable for a 120-ms Standard, is reduced for a 160-ms Standard, and statistically disappears for 200-ms Standard events, confirming our hypothesis. For 120- and 160-ms Standard events, reducing TOEs required increasing the interval between the Standard and the Comparison event from sub-second (400, 800 ms) to supra-second (1600, 2000 ms) lags, suggesting that the orienting of attention in time while waiting for the onset of the Comparison event may work as a back-up strategy to optimize its encoding. Our results highlight the flexible use of two different attentive strategies to optimize subjective time perception.

    Modulation spectra capture EEG responses to speech signals and drive distinct temporal response functions

    Speech signals have a long-term modulation spectrum whose shape is distinct from that of environmental noise, music, and non-speech vocalizations. Does the human auditory system adapt to the speech long-term modulation spectrum and efficiently extract critical information from speech signals? To answer this question, we tested whether neural responses to speech signals can be captured by specific modulation spectra of non-speech acoustic stimuli. We generated amplitude-modulated (AM) noise with the speech modulation spectrum and with 1/f modulation spectra of different exponents to imitate the temporal dynamics of different natural sounds. We presented these AM stimuli and a 10-min piece of natural speech to 19 human participants undergoing electroencephalography (EEG) recording. We derived temporal response functions (TRFs) to the AM stimuli of different spectral shapes and found distinct neural dynamics for each type of TRF. We then used the TRFs of the AM stimuli to predict neural responses to the speech signals and found that (1) the TRFs for modulation-spectrum exponents of 1, 1.5, and 2 preferentially captured EEG responses to speech signals in the δ band, and (2) the θ band of the speech neural responses was best captured by AM stimuli with an exponent of 0.75. Our results suggest that the human auditory system shows specificity to the long-term modulation spectrum and is equipped with characteristic neural algorithms tailored to extract critical acoustic information from speech signals.
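    One plausible way to synthesize such stimuli, sketched here under the assumption that the envelope's power spectrum should fall off as 1/f^alpha below some modulation-rate cutoff (the exponent, cutoff, and durations are illustrative, not the study's parameters): shape white noise in the frequency domain, invert it, and use the result as an amplitude envelope on a wideband noise carrier.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, alpha = 16000, 2.0, 1.0       # sample rate (Hz), duration (s), spectral exponent
n = int(fs * dur)

# Shape complex white noise in the frequency domain so envelope power ~ 1/f^alpha.
freqs = np.fft.rfftfreq(n, 1.0 / fs)
spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spec[1:] *= freqs[1:] ** (-alpha / 2.0)   # amplitude ~ f^(-alpha/2) => power ~ 1/f^alpha
spec[0] = 0.0                             # remove DC
spec[freqs > 32.0] = 0.0                  # keep only slow modulation rates (assumed 32 Hz cutoff)

env = np.fft.irfft(spec, n)
env = (env - env.min()) / (env.max() - env.min())   # map envelope into [0, 1]

carrier = rng.standard_normal(n)   # wideband noise carrier
stimulus = env * carrier           # amplitude-modulated (AM) noise
```

    Varying alpha then changes how strongly slow modulations dominate the envelope, which is the manipulation the abstract describes.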

    The paradoxical role of emotional intensity in the perception of vocal affect

    Vocalizations including laughter, cries, moans, or screams constitute a potent source of information about the affective states of others. It is typically conjectured that the higher the intensity of the expressed emotion, the better the classification of affective information. However, attempts to map the relation between affective intensity and inferred meaning are controversial. Based on a newly developed stimulus database of carefully validated non-speech expressions ranging across the entire intensity spectrum from low to peak, we show that this intuition is false. In three experiments (N = 90), we demonstrate that intensity in fact has a paradoxical role. Participants were asked to rate and classify the authenticity, intensity, and emotion, as well as the valence and arousal, of the wide range of vocalizations. Listeners are clearly able to infer expressed intensity and arousal; in contrast, and surprisingly, emotion category and valence have a perceptual sweet spot: moderate and strong emotions are clearly categorized, but peak emotions are maximally ambiguous. This finding, which converges with related observations from visual experiments, raises interesting theoretical challenges for the emotion communication literature.

    Effects of part- and whole-object primes on early MEG responses to Mooney faces and houses

    Results from neurophysiological experiments suggest that face recognition engages a sensitive mechanism that is reflected in increased amplitude and decreased latency of the MEG M170 response compared to non-face visual targets. Furthermore, whereas recognition of objects (e.g., houses) has been argued to be based on individual features (e.g., door, window), face recognition may depend more on holistic information. Here we analyzed priming effects of component and holistic primes on 20 participants' early MEG responses to two-tone (Mooney) images to determine whether face recognition in this context engages "featural" or "configural" processing. Although visually underspecified, the Mooney images in this study elicited M170 responses that replicate the typical face vs. house effect. However, we found a distinction between holistic and component primes that modulated this effect depending on the compatibility (match) between prime and target. The facilitatory effect of holistic faces and houses for Mooney faces and houses, respectively, suggests that recognition of both Mooney faces and houses, which are low-spatial-frequency stimuli, is based on holistic information.

    The anticipation of events in time

    Humans anticipate events signaled by sensory cues. It is commonly assumed that two uncertainty parameters modulate the brain's capacity to predict: the hazard rate (HR) of event probability and the uncertainty in time estimation, which increases with elapsed time. We investigate both assumptions by presenting event probability density functions (PDFs) in each of three sensory modalities. We show that perceptual systems use the reciprocal PDF, and not the HR, to model event probability density. We also demonstrate that temporal uncertainty does not necessarily grow with elapsed time but can also diminish, depending on the event PDF. Previous research identified neuronal activity related to event probability at multiple levels of the cortical hierarchy (sensory (V4), association (LIP), motor, and other areas) and proposed the HR as an elementary neuronal computation. Our results, consistent across vision, audition, and somatosensation, suggest that the neurobiological implementation of event anticipation is based on a different, simpler, and more stable computation than the HR: the reciprocal PDF of events in time.
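    The two candidate computations are easy to state. For an event-time density f(t) with cumulative distribution F(t), the hazard rate is HR(t) = f(t) / (1 - F(t)), whereas the alternative favored here is simply the reciprocal PDF, 1/f(t). A sketch with a hypothetical Gaussian event-time distribution (the mean, spread, and time grid are illustrative):

```python
import numpy as np
from math import erf, sqrt

t = np.linspace(0.1, 3.0, 500)   # elapsed time in seconds (illustrative grid)
mu, sd = 1.5, 0.3                # hypothetical Gaussian event-time distribution

# Gaussian PDF and CDF evaluated on the grid
f = np.exp(-0.5 * ((t - mu) / sd) ** 2) / (sd * sqrt(2.0 * np.pi))
F = np.array([0.5 * (1.0 + erf((ti - mu) / (sd * sqrt(2.0)))) for ti in t])

hazard = f / (1.0 - F)   # HR: event likelihood given the event has not yet occurred
recip = 1.0 / f          # reciprocal PDF: the simpler computation the study supports
```

    For a Gaussian PDF the hazard rate grows monotonically with elapsed time, while the reciprocal PDF dips to a minimum at the most likely event time and rises again, so the two models make clearly distinguishable behavioral predictions.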